81 research outputs found

    Urban sustainability, agglomeration forces and the technological deus ex machina

    This paper addresses the issue of spatial environmental externalities from a spatial general equilibrium perspective. We present a general equilibrium model of an island economy with one city and a rural hinterland. Apart from market-internal interactions such as those governing trade and locational choice, the small economy considered encompasses a number of spatial external effects. Agglomeration externalities explain why industrial production is concentrated in a city, where land prices are higher than elsewhere. Industrial production, however, leads to environmental pollution, which negatively affects both the quality of life within the city and rural agricultural production elsewhere. The paper explores the welfare properties of market-based spatial general equilibria compared with efficient configurations. The conclusions allow a welfare-economic assessment of the currently popular concept of ‘ecological footprints’ from a spatial general equilibrium viewpoint.
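    As a purely illustrative sketch of how the two spatial externalities described above might enter such a model (the notation is assumed here for exposition and is not taken from the paper): urban households value a composite good and land but suffer from local pollution, while rural agricultural productivity declines in the same pollution stock, which grows with urban industrial activity,

    \[
    U_{\text{city}} = x^{\alpha} q^{1-\alpha} - \delta E, \qquad
    Y_{\text{rural}} = A(E)\,L^{\beta} \ \text{with} \ A'(E) < 0, \qquad
    E = \varepsilon N_{\text{city}},
    \]

    where \(x\) is the composite good, \(q\) land consumption, \(E\) aggregate emissions and \(N_{\text{city}}\) urban industrial employment. Because individual agents ignore the pollution terms, the market equilibrium generally differs from the efficient configuration against which the paper compares it.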

    Energy policies in spatial systems: A spatial price equilibrium approach with heterogeneous regions and endogenous technologies

    This paper presents a framework for analysing spatial aspects of environmental policies in the regulation of trans-boundary externalities. A spatial price equilibrium model for two regions is constructed, where interactions between these regions can occur via trade and transport, via mutual environmental spill-overs due to the externality that arises from production, and via tax competition when the regions do not behave cooperatively. Explicit attention is also given to the additional complications arising from emissions caused by the endogenous transport flows.
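    As a stylized illustration of the structure just described (symbols are assumed here, not the paper's), the spatial price equilibrium part of such a two-region model links prices through transport costs, while damages in each region depend on own emissions, the spill-over from the other region, and emissions generated by the transport flow itself:

    \[
    p_j \le p_i + c_{ij}, \ \text{with equality whenever the trade flow } T_{ij} > 0,
    \qquad
    D_i = d_i\!\left(e_i + \sigma\, e_j + e_T(T_{ij})\right), \quad 0 \le \sigma \le 1.
    \]

    A non-cooperative emission tax set by region \(i\) then ignores the spill-over term \(\sigma e_i\) entering region \(j\)'s damage as well as part of the transport-related emissions, which is the source of the tax-competition inefficiency the paper analyses.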

    De maatschappelijke haalbaarheid van rekeningrijden (The societal feasibility of road pricing)


    Sustainable mobility


    On the Anatomy of MCMC-Based Maximum Likelihood Learning of Energy-Based Models

    This study investigates the effects of Markov chain Monte Carlo (MCMC) sampling in unsupervised Maximum Likelihood (ML) learning. Our attention is restricted to the family of unnormalized probability densities for which the negative log density (or energy function) is a ConvNet. We find that many of the techniques used to stabilize training in previous studies are not necessary. ML learning with a ConvNet potential requires only a few hyper-parameters and no regularization. Using this minimal framework, we identify a variety of ML learning outcomes that depend solely on the implementation of MCMC sampling. On one hand, we show that it is easy to train an energy-based model which can sample realistic images with short-run Langevin. ML can be effective and stable even when MCMC samples have much higher energy than true steady-state samples throughout training. Based on this insight, we introduce an ML method with purely noise-initialized MCMC, high-quality short-run synthesis, and the same budget as ML with informative MCMC initialization such as CD or PCD. Unlike previous models, our energy model can obtain realistic high-diversity samples from a noise signal after training. On the other hand, ConvNet potentials learned with non-convergent MCMC do not have a valid steady-state and cannot be considered approximate unnormalized densities of the training data because long-run MCMC samples differ greatly from observed images. We show that it is much harder to train a ConvNet potential to learn a steady-state over realistic images. To our knowledge, long-run MCMC samples of all previous models lose the realism of short-run samples. With correct tuning of Langevin noise, we train the first ConvNet potentials for which long-run and steady-state MCMC samples are realistic images.
    Code available at: https://github.com/point0bar1/ebm-anatom
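    The training procedure described above is compact enough to sketch. The following is a minimal, illustrative PyTorch version of ML learning with a ConvNet energy and noise-initialized short-run Langevin sampling; the architecture, step size, noise scale, and step count are assumptions for a 32x32 RGB setting, not the authors' settings (see the linked repository for those), and the noise scale is decoupled from the step size as a simplification.

    import torch
    import torch.nn as nn

    class ConvNetEnergy(nn.Module):
        """Scalar energy U(x) for 32x32 RGB images (illustrative architecture)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, 1, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),   # 32 -> 16
                nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),  # 16 -> 8
                nn.Flatten(), nn.Linear(256 * 8 * 8, 1),
            )

        def forward(self, x):
            return self.net(x).squeeze(1)  # one energy value per image

    def langevin_sample(energy, x, n_steps=100, step=1e-2, noise=5e-3):
        # Short-run Langevin: x <- x - (step/2) * grad_x U(x) + noise * N(0, I)
        for _ in range(n_steps):
            x = x.detach().requires_grad_(True)
            grad = torch.autograd.grad(energy(x).sum(), x)[0]
            x = x - 0.5 * step * grad + noise * torch.randn_like(x)
        return x.detach()

    def ml_step(energy, opt, data_batch):
        # One ML update with purely noise-initialized short-run MCMC.
        init = torch.rand_like(data_batch) * 2 - 1          # uniform noise in [-1, 1]
        samples = langevin_sample(energy, init)
        # ML gradient: push down energy of data, push up energy of model samples.
        loss = energy(data_batch).mean() - energy(samples).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Hypothetical usage: data scaled to [-1, 1], e.g.
    #   energy = ConvNetEnergy()
    #   opt = torch.optim.Adam(energy.parameters(), lr=1e-4)
    #   loss = ml_step(energy, opt, data_batch)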

    BigIssue: A Realistic Bug Localization Benchmark

    As machine learning tools progress, the inevitable question arises: how can machine learning help us write better code? With significant progress being achieved in natural language processing with models like GPT-3 and BERT, the application of natural language processing techniques to code is starting to be explored. Most of the research has focused on automatic program repair (APR), and while the results on synthetic or highly filtered datasets are promising, such models are hard to apply in real-world scenarios because of inadequate bug localization. We propose BigIssue: a benchmark for realistic bug localization. The goal of the benchmark is two-fold. We provide (1) a general benchmark with a diversity of real and synthetic Java bugs and (2) a motivation to improve the bug localization capabilities of models through attention to the full repository context. With the introduction of BigIssue, we hope to advance the state of the art in bug localization, in turn improving APR performance and increasing its applicability to the modern development cycle.